GrooveNet: Real-Time Music-Driven Dance Movement Generation using Artificial Neural Networks
https://youtu.be/6q1I6ucudNw
Author
Abstract
We present the preliminary results of GrooveNet, a generative system that learns to synthesize dance movements for a given audio track in real time. Our intended application for GrooveNet is a public interactive installation in which the audience can provide their own music to interact with an avatar. We investigate training artificial neural networks, in particular Factored Conditional Restricted Boltzmann Machines (FCRBM) and Recurrent Neural Networks (RNN), on a small dataset of four synchronized music and motion-capture recordings of dance movements that we captured for this project. Our initial results show that we can train the FCRBM on this small dataset to generate dance movements. However, the model does not generalize well to music tracks beyond the training data. We outline our plans to further develop GrooveNet.
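The core idea of the abstract, generating the next motion-capture pose conditioned on the previous pose and the current audio frame, can be sketched with a toy model. This is not the authors' code: the FCRBM/RNN is replaced here by a simple linear least-squares predictor, and the synchronized music and mocap data are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
POSE_DIM, AUDIO_DIM, FRAMES = 6, 4, 500

# Synthetic stand-ins for one synchronized music + mocap recording.
audio = rng.standard_normal((FRAMES, AUDIO_DIM))
poses = np.cumsum(rng.standard_normal((FRAMES, POSE_DIM)) * 0.1, axis=0)

# Inputs: [previous pose, current audio frame]; target: current pose.
X = np.hstack([poses[:-1], audio[1:]])
Y = poses[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # fit linear dynamics

def generate(model, audio_frames, start_pose):
    """Roll the conditional model forward frame by frame, as a
    real-time system would while the music plays."""
    pose, out = start_pose, []
    for a in audio_frames:
        pose = np.concatenate([pose, a]) @ model
        out.append(pose)
    return np.array(out)

dance = generate(W, audio[:100], poses[0])
print(dance.shape)  # one pose vector per audio frame
```

The frame-by-frame loop in `generate` mirrors the real-time constraint of the installation setting: each output pose depends only on audio already heard.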
Source
Workshop on Machine Learning for Creativity, 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2017), Halifax, Nova Scotia, Canada, 2017.
Comments
Automatic generation of dance movements suited to the music.
URL
Tag